feat(benchmark): Add scenario for expressions with Set node #10647
Conversation
Instead of the Set node, can we use something that's a bit more I/O bound?
From a benchmarking perspective I find the Set node too simplistic to provide any meaningful data that a benchmark without this node would not.
Any ideas what that might be? The idea is to test how fast we can evaluate a bunch of expressions. I'm sure there will be bigger overhead here once expression evaluation becomes async with the introduction of agents.
IMO stuff like this is important, but we need to benchmark stuff like this in isolation.
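Benchmarking expression evaluation "in isolation" could look like the sketch below. This is a hypothetical stand-in, not n8n's actual evaluator or benchmark harness: the `{{ ... }}` syntax mimics n8n's expression style, but `resolveExpression` and `benchmark` are illustrative names, and real expressions have far richer context than a single `item`.

```javascript
// Hypothetical micro-benchmark: evaluate a batch of template-style
// expressions in isolation, so the cost is not mixed with node I/O.
// The {{ ... }} syntax mimics n8n expressions; the resolver is a
// stand-in, not n8n's real implementation.

function resolveExpression(template, item) {
  // Replace each {{ expr }} with the result of evaluating expr
  // against the current item (assumption: expressions only read `item`).
  return template.replace(/\{\{(.+?)\}\}/g, (_, expr) => {
    const fn = new Function('item', `return (${expr});`);
    return String(fn(item));
  });
}

function benchmark(iterations) {
  const template = 'Order {{ item.id }} costs {{ item.price * item.qty }}';
  const item = { id: 42, price: 3.5, qty: 4 };
  const start = process.hrtime.bigint();
  let out = '';
  for (let i = 0; i < iterations; i++) {
    out = resolveExpression(template, item);
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return { out, elapsedMs };
}

const { out, elapsedMs } = benchmark(10000);
console.log(out); // "Order 42 costs 14"
console.log(`${elapsedMs.toFixed(1)} ms for 10k evaluations`);
```

A harness like this would make a future sync-to-async migration of expression evaluation directly measurable, without the noise of a full workflow run.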
n8n Run #6721

Run Properties:

| Property | Value |
|---|---|
| Project | n8n |
| Branch Review | cat-89-add-benchmark-scenario-for-expressions |
| Run status | Passed #6721 |
| Run duration | 04m 48s |
| Commit | 9130293600: 🌳 🖥️ browsers:node18.12.0-chrome107 🤖 tomi 🗃️ e2e/* |
| Committer | Tomi Turtiainen |

Test results:

| Result | Count |
|---|---|
| Failures | 0 |
| Flaky | 1 |
| Pending | 0 |
| Skipped | 0 |
| Passing | 423 |

✅ All Cypress E2E specs passed
Fair point. We are not really interested in raw numbers, but rather in how the performance changes over time. So having at least one scenario that includes expressions is nice to have, even though it might not be the ideal way to test them. We can always revisit this if needed.
* master:
  - refactor(RabbitMQ Trigger Node): Improve type-safety, add tests, and fix issues with manual triggers (#10663)
  - feat(editor): Add support for nodes with multiple main inputs in new canvas (no-changelog) (#10659)
  - fix(editor): Set minimum zoom to 0 to allow fitting very large workflows in new canvas (no-changelog) (#10666)
  - feat(editor): Change selection to be default canvas behaviour (no-changelog) (#10668)
  - feat: More hints to nodes (#10565)
  - fix(editor): Fix opening executions tab from a new, unsaved workflow (#10652)
  - fix(AI Agent Node): Fix tools agent when using memory and Anthropic models (#10513)
  - feat(editor): Make highlighted data pane floating (#10638)
  - fix(editor): Fix workflow loading after switching to executions view in new canvas (no-changelog) (#10655)
  - refactor(benchmark): Separate cloud env provisioning from running benchmarks (#10657)
  - feat(core): Implement wrapping of regular nodes as AI Tools (#10641)
  - refactor(editor): Remove Trial logic in personalization modal and port to script setup (#10649)
  - fix(core): Declutter webhook insertion errors (#10650)
  - feat: Reintroduce collaboration feature (#10602)
  - feat(benchmark): Add scenario for expressions with Set node (#10647)
  - feat(benchmark): Add benchmark scenario for binary files (#10648)
  - build: Add `reset` script (#10627)
  - feat(editor): Overhaul node insert position computation in new canvas (no-changelog) (#10637)
Got released with
Summary
Add scenario for expressions with Set node
Related Linear tickets, GitHub issues, and Community forum posts
CAT-89
Review / Merge checklist
- release/backport (if the PR is an urgent fix that needs to be backported)